160 research outputs found

    Bottom-up retinotopic organization supports top-down mental imagery

    Finding a path between locations is a routine task in daily life. Mental navigation is often used to plan a route to a destination that is not visible from the current location. We first used functional magnetic resonance imaging (fMRI) and surface-based averaging methods to find high-level brain regions involved in imagined navigation between locations in a building very familiar to each participant. This revealed a mental navigation network that includes the precuneus, retrosplenial cortex (RSC), parahippocampal place area (PPA), occipital place area (OPA), supplementary motor area (SMA), premotor cortex, and areas along the medial and anterior intraparietal sulcus. We then visualized retinotopic maps across the entire cortex using wide-field, natural scene stimuli in a separate set of fMRI experiments. This revealed five distinct visual streams or ‘fingers’ that extend anteriorly into middle temporal, superior parietal, medial parietal, retrosplenial and ventral occipitotemporal cortex. By using spherical morphing to overlap these two data sets, we showed that the mental navigation network primarily occupies areas that also contain retinotopic maps. Specifically, scene-selective regions RSC, PPA and OPA share an emphasis on the far periphery of the upper visual field. These results suggest that bottom-up retinotopic organization may help to efficiently encode scene and location information in an eye-centered reference frame for top-down, internally generated mental navigation. This study pushes the border of visual cortex further anterior than was initially expected.

    Multiple parietal reach regions in humans: cortical representations for visual and proprioceptive feedback during on-line reaching

    Reaching toward a visual target involves at least two sources of information. One is the visual feedback from the hand as it approaches the target. Another is proprioception from the moving limb, which informs the brain of the location of the hand relative to the target even when the hand is not visible. Where these two sources of information are represented in the human brain is unknown. In the present study, we investigated the cortical representations for reaching with or without visual feedback from the moving hand, using functional magnetic resonance imaging. To identify reach-dominant areas, we compared reaching with saccades. Our results show that a reach-dominant region in the anterior precuneus (aPCu), extending into medial intraparietal sulcus, is equally active in visual and nonvisual reaching. A second region, at the superior end of the parieto-occipital sulcus (sPOS), is more active for visual than for nonvisual reaching. These results suggest that aPCu is a sensorimotor area whose sensory input is primarily proprioceptive, while sPOS is a visuomotor area that receives visual feedback during reaching. In addition to the precuneus, medial, anterior intraparietal, and superior parietal cortex were also activated during both visual and nonvisual reaching, with more anterior areas responding to hand movements only and more posterior areas responding to both hand and eye movements. Our results suggest that cortical networks for reaching are differentially activated depending on the sensory conditions during reaching. This indicates the involvement of multiple parietal reach regions in humans, rather than a single homogeneous parietal reach region.

    In vivo functional and myeloarchitectonic mapping of human primary auditory areas

    In contrast to vision, where retinotopic mapping alone can define areal borders, primary auditory areas such as A1 are best delineated by combining in vivo tonotopic mapping with postmortem cyto- or myeloarchitectonics from the same individual. We combined high-resolution (800 μm) quantitative T(1) mapping with phase-encoded tonotopic methods to map primary auditory areas (A1 and R) within the "auditory core" of human volunteers. We first quantitatively characterize the highly myelinated auditory core in terms of shape, area, cortical depth profile, and position, with our data showing considerable correspondence to postmortem myeloarchitectonic studies, both in cross-participant averages and in individuals. The core region contains two "mirror-image" tonotopic maps oriented along the same axis as observed in macaque and owl monkey. We suggest that these two maps within the core are the human analogs of primate auditory areas A1 and R. The core occupies a much smaller portion of tonotopically organized cortex on the superior temporal plane and gyrus than is generally supposed. This multimodal approach to defining the auditory core will facilitate investigations of structure-function relationships and comparative neuroanatomical studies, and promises new biomarkers for diagnosis and clinical studies.
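The phase-encoded (traveling-wave) analysis behind tonotopic mapping of this kind can be sketched in a few lines: each voxel's preferred stimulus value is read out as the Fourier phase of its time series at the stimulus repetition frequency. This is a generic illustration, not the authors' pipeline; the function name and the coherence measure are our own.

```python
import numpy as np

def phase_encoded_map(ts, n_cycles):
    """Phase-encoded (traveling-wave) analysis of fMRI time series.

    ts       : (n_voxels, n_timepoints) BOLD time series
    n_cycles : number of stimulus sweeps in the run
    Returns the response phase in [0, 2*pi) per voxel, which maps onto
    position along the tonotopic axis, and a coherence measure.
    """
    demeaned = ts - ts.mean(axis=1, keepdims=True)
    spec = np.fft.rfft(demeaned, axis=1)
    signal = spec[:, n_cycles]          # component at the stimulus frequency
    phase = np.angle(signal) % (2 * np.pi)
    amp = np.abs(spec)
    # coherence: stimulus-frequency amplitude relative to all frequencies
    coherence = np.abs(signal) / np.sqrt((amp ** 2).sum(axis=1))
    return phase, coherence
```

A voxel responding sinusoidally at the sweep frequency yields a coherence near 1 and a phase equal to its position in the sweep.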

    Controversial issues in visual cortex mapping: Extrastriate cortex between areas V2 and MT in human and nonhuman primates

    The visual cerebral cortex of primates includes a mosaic of anatomically and functionally distinct areas processing visual information. While there is universal agreement about the location, boundaries, and topographic organization of the areas at the earliest stages of visual processing in many primate species, i.e., the primary (V1), secondary (V2), and middle temporal (MT) visual areas, there is still ongoing debate regarding the exact parcellation of cortex located between areas V2 and MT. Several parcellation schemes have been proposed for extrastriate cortex even within the same species. With the exception of V1, V2, and MT, these schemes differ in areal borders, areal location, neighboring relations, number of areas, and nomenclature. As a result, most anatomical and physiological studies of these areas have been carried out following one or another scheme, in the absence of any general agreement. This situation inevitably hampers our understanding of the function and evolution of these visual areas. The goal of this special issue is to provide a critical review and evaluation of the literature on the most controversial issues regarding the parcellation of extrastriate cortex, to identify the main reasons for the controversy, and to suggest critical future experimental approaches that could lead to a consensus about the anatomical and functional identity of these areas.

    Eye position modulates retinotopic responses in early visual areas: a bias for the straight-ahead direction

    Even though the eyes constantly change position, the location of a stimulus can be accurately represented by a population of neurons with retinotopic receptive fields modulated by eye position gain fields. Recent electrophysiological studies, however, indicate that eye position gain fields may serve an additional function since they have a non-uniform spatial distribution that increases the neural response to stimuli in the straight-ahead direction. We used functional magnetic resonance imaging and a wide-field stimulus display to determine whether gaze modulations in early human visual cortex enhance the blood-oxygenation-level dependent (BOLD) response to stimuli that are straight-ahead. Subjects viewed rotating polar angle wedge stimuli centered straight-ahead or vertically displaced by ±20° eccentricity. Gaze position did not affect the topography of polar phase-angle maps, confirming that coding was retinotopic, but did affect the amplitude of the BOLD response, consistent with a gain field. In agreement with recent electrophysiological studies, BOLD responses in V1 and V2 to a wedge stimulus at a fixed retinal locus decreased when the wedge location in head-centered coordinates was farther from the straight-ahead direction. We conclude that stimulus-evoked BOLD signals are modulated by a systematic, non-uniform distribution of eye-position gain fields.
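A gain field with a straight-ahead bias, as described above, can be illustrated with a toy multiplicative model: the retinotopic response is scaled by a gain that peaks when the stimulus is straight ahead in head-centered coordinates. The Gaussian form and the width `sigma` are illustrative assumptions, not estimates from the study.

```python
import numpy as np

def gain_modulated_response(retinal_response, head_centered_ecc, sigma=40.0):
    """Toy eye-position gain-field model with a straight-ahead bias.

    retinal_response  : response to a stimulus at a fixed retinal locus
    head_centered_ecc : distance (deg) of the stimulus from straight ahead
                        in head-centered coordinates
    sigma             : assumed width (deg) of the straight-ahead bias

    Retinotopic tuning is left untouched; only the response amplitude
    is scaled, so map topography is unchanged while amplitude falls off
    away from straight ahead.
    """
    gain = np.exp(-0.5 * (head_centered_ecc / sigma) ** 2)
    return gain * retinal_response
```

With this model, the same retinal stimulus evokes a smaller response when gaze displaces it 20° from straight ahead, matching the pattern reported for V1 and V2.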

    Retinotopic organization of extrastriate cortex in the owl monkey—dorsal and lateral areas

    Dense retinotopy data sets were obtained by microelectrode visual receptive field mapping in dorsal and lateral visual cortex of anesthetized owl monkeys. The cortex was then physically flatmounted and stained for myelin or cytochrome oxidase. Retinotopic mapping data were digitized, interpolated to a uniform grid, analyzed using the visual field sign technique—which locally distinguishes mirror image from nonmirror image visual field representations—and correlated with the myelin or cytochrome oxidase patterns. The region between V2 (nonmirror) and MT (nonmirror) contains three areas—DLp (mirror), DLi (nonmirror), and DLa/MTc (mirror). DM (mirror) was thin anteroposteriorly, and its reduced upper field bent somewhat anteriorly away from V2. DI (nonmirror) directly adjoined V2 (nonmirror) and contained only an upper field representation that also adjoined upper field DM (mirror). Retinotopy was used to define area VPP (nonmirror), which adjoins DM anteriorly, area FSTd (mirror), which adjoins MT ventrolaterally, and TP (mirror), which adjoins MT and DLa/MTc dorsoanteriorly. There was additional retinotopic and architectonic evidence for five more subdivisions of dorsal and lateral extrastriate cortex—TA (nonmirror), MSTd (mirror), MSTv (nonmirror), FSTv (nonmirror), and PP (mirror). Our data appear quite similar to data from marmosets, though our field sign-based areal subdivisions are slightly different. The region immediately anterior to the superiorly located central lower visual field V2 varied substantially between individuals, but always contained upper fields immediately touching lower visual field V2. This region appears to vary even more between species. 
Though we provide a summary diagram, given within- and between-species variation, it should be regarded as a guide to parsing complex retinotopy rather than a literal representation of any individual, or as the only way to agglomerate the complex mosaic of partial upper and lower field, mirror- and nonmirror-image patches into areas.
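The visual field sign technique used above has a compact formulation: on a flattened cortical patch, the sign of the cross product of the azimuth-map and elevation-map gradients distinguishes mirror-image from nonmirror-image representations of the visual field. A minimal sketch; the assignment of +1 to nonmirror and -1 to mirror is a convention of this sketch, not taken from the paper.

```python
import numpy as np

def visual_field_sign(azimuth, elevation):
    """Visual field sign from retinotopic maps on a 2-D (flattened) grid.

    azimuth, elevation : 2-D arrays of preferred visual-field coordinates
    Returns +1 where the representation is a nonmirror image of the
    visual field and -1 where it is a mirror image, from the sign of the
    cross product of the two retinotopic gradients.
    """
    daz_dy, daz_dx = np.gradient(azimuth)    # axis 0 = rows, axis 1 = columns
    del_dy, del_dx = np.gradient(elevation)
    cross = daz_dx * del_dy - daz_dy * del_dx
    return np.sign(cross)
```

An identity mapping (azimuth increasing with x, elevation with y) gives a uniform nonmirror patch; flipping one coordinate yields its mirror image, which is how adjacent areas such as V2 (nonmirror) and DLp (mirror) produce alternating field sign.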

    Origin of symbol-using systems: speech, but not sign, without the semantic urge

    Natural language—spoken and signed—is a multichannel phenomenon, involving facial and body expression, and voice and visual intonation that is often used in the service of a social urge to communicate meaning. Given that iconicity seems easier and less abstract than making arbitrary connections between sound and meaning, iconicity and gesture have often been invoked in the origin of language alongside the urge to convey meaning. To get a fresh perspective, we critically distinguish the origin of a system capable of evolution from the subsequent evolution that system becomes capable of. Human language arose on a substrate of a system already capable of Darwinian evolution; the genetically supported uniquely human ability to learn a language reflects a key contact point between Darwinian evolution and language. Though implemented in brains generated by DNA symbols coding for protein meaning, the second higher-level symbol-using system of language now operates in a world mostly decoupled from Darwinian evolutionary constraints. Examination of Darwinian evolution of vocal learning in other animals suggests that the initial fixation of a key prerequisite to language into the human genome may actually have required initially side-stepping not only iconicity, but the urge to mean itself. If sign languages came later, they would not have faced this constraint.

    Reconstructing neural representations of tactile space

    Psychophysical experiments have demonstrated large and highly systematic perceptual distortions of tactile space. Tactile space refers to our experience, at the representational level, of the spatial organisation of objects through touch, by analogy with the familiar concept of visual space. We investigated the neural basis of tactile space by analysing activity patterns induced by tactile stimulation of nine points on a 3 × 3 square grid on the hand dorsum using functional magnetic resonance imaging. We used a searchlight approach within pre-defined regions of interest to compute the pairwise Euclidean distances between the activity patterns elicited by tactile stimulation. We then used multidimensional scaling to reconstruct tactile space at the neural level and compared it with skin space at the perceptual level. Our reconstructions of the shape of skin space in contralateral primary somatosensory and motor cortices reveal that it is distorted in a way that matches the perceptual shape of skin space. This suggests that early sensorimotor areas critically contribute to the distorted internal representation of tactile space on the hand dorsum.
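The reconstruction step described above, multidimensional scaling applied to a matrix of pairwise pattern distances, can be sketched with classical (Torgerson) MDS. Whether the authors used the classical or a metric-stress variant is not stated here, so this is an illustrative stand-in.

```python
import numpy as np

def classical_mds(dist, n_dims=2):
    """Classical (Torgerson) MDS: embed n points in n_dims dimensions so
    that their Euclidean distances approximate the given distances.

    dist : (n, n) symmetric matrix of pairwise distances
           (e.g. Euclidean distances between activity patterns)
    """
    n = dist.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (dist ** 2) @ J           # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:n_dims]  # keep largest eigenvalues
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))
```

Applied to distances between patterns evoked by the nine grid points, this returns a 2-D layout whose geometry can be compared with the physical 3 × 3 grid on the skin; systematic distortions of that layout are what the study reports.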

    Mapping the human cortical surface by combining quantitative T(1) with retinotopy

    We combined quantitative relaxation rate (R1 = 1/T1) mapping, which measures local myelination, with fMRI-based retinotopy. Gray-white and pial surfaces were reconstructed and used to sample R1 at different cortical depths. Like myelination, R1 decreased from deeper to superficial layers. R1 also decreased passing from V1 and MT to immediately surrounding areas, and then to the angular gyrus. Because high R1 was correlated across the cortex with convex local curvature, the data were first "de-curved". By overlaying R1 and retinotopic maps, we found that many visual area borders were associated with significant R1 increases, including those of V1, V3A, MT, V6, V6A, V8/VO1, FST, and VIP. Surprisingly, retinotopic MT occupied only the posterior portion of an oval-shaped lateral occipital R1 maximum. R1 maps were reproducible within individuals and comparable between subjects without intensity normalization, enabling multi-center studies of development, aging, and disease progression, as well as structure/function mapping in other modalities.
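Sampling a quantitative map at fractional cortical depths, as described above, reduces to interpolating coordinates between matched gray-white and pial surface vertices and converting T1 to R1 = 1/T1. A minimal sketch assuming one-to-one vertex correspondence and linear (rather than equi-volume) depth interpolation; `t1_volume_sampler` is a hypothetical interpolation function standing in for a real volume-to-surface sampler.

```python
import numpy as np

def sample_r1_at_depth(t1_volume_sampler, white_xyz, pial_xyz, depth):
    """Sample R1 = 1/T1 at a fractional cortical depth.

    t1_volume_sampler : function mapping (n, 3) coordinates to T1 values (s)
    white_xyz, pial_xyz : (n, 3) matched vertex coordinates of the
        gray-white and pial surfaces (vertex correspondence assumed)
    depth : 0.0 = gray-white surface, 1.0 = pial surface

    Coordinates are interpolated linearly between the two surfaces;
    production pipelines typically use equi-volume depth sampling.
    """
    xyz = (1.0 - depth) * white_xyz + depth * pial_xyz
    t1 = t1_volume_sampler(xyz)
    return 1.0 / t1   # relaxation rate R1 (1/s)
```

Sweeping `depth` from 0 to 1 yields the cortical depth profile of R1, which the abstract reports decreasing from deeper to superficial layers.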